
    Decoding of MSTd Population Activity Accounts for Variations in the Precision of Heading Perception

    Humans and monkeys use both vestibular and visual motion (optic flow) cues to discriminate their direction of self-motion during navigation. A striking property of heading perception from optic flow is that discrimination is most precise when subjects judge small variations in heading around straight ahead, whereas thresholds rise precipitously when subjects judge heading around an eccentric reference. We show that vestibular heading discrimination thresholds in both humans and macaques also show a consistent, but modest, dependence on reference direction. We used computational methods (Fisher information, maximum likelihood estimation, and population vector decoding) to show that population activity in area MSTd predicts the dependence of heading thresholds on reference eccentricity. This dependence arises because the tuning functions for most neurons have a steep slope for directions near straight forward. Our findings support the notion that population activity in extrastriate cortex limits the precision of both visual and vestibular heading perception.
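One of the decoders named above, the population vector, can be sketched in a few lines: the heading estimate is the angle of the rate-weighted vector sum of each neuron's preferred direction. The cosine-tuned toy population below is illustrative only and is not the paper's actual MSTd data or decoder.

```python
import math

def population_vector_decode(rates, preferred_dirs_deg):
    """Estimate heading (deg) as the rate-weighted vector sum of each
    neuron's preferred direction -- a standard population vector readout
    (the paper's full analysis also used Fisher information and ML)."""
    x = sum(r * math.cos(math.radians(p))
            for r, p in zip(rates, preferred_dirs_deg))
    y = sum(r * math.sin(math.radians(p))
            for r, p in zip(rates, preferred_dirs_deg))
    return math.degrees(math.atan2(y, x))

# Hypothetical cosine-tuned population responding to a 10 deg heading.
prefs = [-90, -45, 0, 45, 90]
true_heading = 10.0
rates = [max(0.0, math.cos(math.radians(true_heading - p))) for p in prefs]
est = population_vector_decode(rates, prefs)  # close to 10 deg
```

With symmetric cosine tuning the readout recovers the stimulus almost exactly; with the steep near-straight-ahead tuning described in the abstract, the same readout becomes less precise at eccentric references.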

    Perceptual Learning Reduces Interneuronal Correlations in Macaque Visual Cortex

    Responses of neurons in early visual cortex change little with training and appear insufficient to account for perceptual learning. Behavioral performance, however, relies on population activity, and the accuracy of a population code is constrained by correlated noise among neurons. We tested whether training changes interneuronal correlations in the dorsal medial superior temporal area, which is involved in multisensory heading perception. Pairs of single units were recorded simultaneously in two groups of subjects: animals trained extensively in a heading discrimination task, and “naive” animals that performed a passive fixation task. Correlated noise was significantly weaker in trained versus naive animals, which might be expected to improve coding efficiency. However, we show that the observed uniform reduction in noise correlations leads to little change in population coding efficiency when all neurons are decoded. Thus, global changes in correlated noise among sensory neurons may be insufficient to account for perceptual learning.
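The quantity being compared across trained and naive animals, the noise correlation, is commonly defined as the Pearson correlation of trial-to-trial responses of a simultaneously recorded pair to repeats of the same stimulus. A minimal sketch of that computation, on hypothetical spike counts:

```python
import math

def noise_correlation(resp_a, resp_b):
    """Pearson correlation of trial-by-trial responses of two neurons to
    repeated presentations of one stimulus (a standard definition of
    noise correlation; the data below are made up for illustration)."""
    n = len(resp_a)
    ma, mb = sum(resp_a) / n, sum(resp_b) / n
    cov = sum((a - ma) * (b - mb)
              for a, b in zip(resp_a, resp_b)) / (n - 1)
    sa = math.sqrt(sum((a - ma) ** 2 for a in resp_a) / (n - 1))
    sb = math.sqrt(sum((b - mb) ** 2 for b in resp_b) / (n - 1))
    return cov / (sa * sb)

# Perfectly shared trial-to-trial fluctuations give r = 1.
a = [10, 12, 9, 11, 13]
b = [20, 22, 19, 21, 23]
r = noise_correlation(a, b)
```

In practice the mean response to each stimulus is subtracted separately before pooling trials, so that stimulus-driven covariation is not counted as noise.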

    Neural correlates of reliability-based cue weighting during multisensory integration

    Integration of multiple sensory cues is essential for precise and accurate perception and behavioral performance, yet the reliability of sensory signals can vary across modalities and viewing conditions. Human observers typically employ the optimal strategy of weighting each cue in proportion to its reliability, but the neural basis of this computation remains poorly understood. We trained monkeys to perform a heading discrimination task from visual and vestibular cues, varying cue reliability randomly. The monkeys appropriately placed greater weight on the more reliable cue, and population decoding of neural responses in the dorsal medial superior temporal area closely predicted behavioral cue weighting, including modest deviations from optimality. We found that the mathematical combination of visual and vestibular inputs by single neurons is generally consistent with recent theories of optimal probabilistic computation in neural circuits. These results provide direct evidence for a neural mechanism mediating a simple and widespread form of statistical inference.
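The optimal strategy the abstract refers to has a compact form in standard cue-combination models: each cue's weight is its reliability (inverse variance) divided by the summed reliabilities, and the combined estimate has lower variance than either cue alone. A sketch under those textbook assumptions (the sigmas below are arbitrary, not the study's measured thresholds):

```python
def optimal_cue_weights(sigma_vis, sigma_ves):
    """Weight each cue in proportion to its reliability (1/variance),
    as in standard optimal cue-combination models."""
    r_vis = 1.0 / sigma_vis ** 2
    r_ves = 1.0 / sigma_ves ** 2
    total = r_vis + r_ves
    return r_vis / total, r_ves / total

def combined_sigma(sigma_vis, sigma_ves):
    """Predicted discrimination threshold for the combined condition:
    never worse than the better single cue."""
    return (1.0 / (1.0 / sigma_vis ** 2 + 1.0 / sigma_ves ** 2)) ** 0.5

# Hypothetical thresholds: the visual cue is the more reliable one.
w_vis, w_ves = optimal_cue_weights(1.0, 2.0)   # w_vis = 0.8, w_ves = 0.2
s_comb = combined_sigma(1.0, 2.0)              # below 1.0
```

The behavioral test in such studies is whether the empirically measured weights and combined thresholds track these predictions as reliability is varied from trial to trial.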

    Spatiotemporal Properties of Vestibular Responses in Area MSTd

    Recent studies have shown that many neurons in the primate dorsal medial superior temporal area (MSTd) show spatial tuning during inertial motion and that these responses are vestibular in origin. Given their well-studied role in processing visual self-motion cues (i.e., optic flow), these neurons may be involved in the integration of visual and vestibular signals to facilitate robust perception of self-motion. However, the temporal structure of vestibular responses in MSTd has not been characterized in detail. Specifically, it is not known whether MSTd neurons encode velocity, acceleration, or some combination of motion parameters not explicitly encoded by vestibular afferents. In this study, we have applied a frequency-domain analysis to single-unit responses during translation in three dimensions (3D). The analysis quantifies the stimulus-driven temporal modulation of each response as well as the degree to which this modulation reflects the velocity and/or acceleration profile of the stimulus. We show that MSTd neurons signal a combination of velocity and acceleration components, with the velocity component being stronger for most neurons. These two components can exist both within and across motion directions, although their spatial tuning did not show a systematic relationship across the population. From these results, vestibular responses in MSTd appear to show characteristic features of spatiotemporal convergence, similar to previous findings in the brain stem and thalamus. The predominance of velocity encoding in this region may reflect the suitability of these signals for integration with visual signals in self-motion perception.
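The core of such an analysis, separating how much of a response tracks the stimulus velocity versus its acceleration, can be illustrated with a simple time-domain regression onto the two profiles (the paper itself works in the frequency domain, and the Gaussian velocity profile below is an assumption for illustration, not the actual stimulus):

```python
import math

def vel_acc_fit(t, response, sigma=0.5, t0=1.0):
    """Regress a response time course onto an assumed Gaussian velocity
    profile and its derivative (acceleration); the fitted weights
    indicate the relative strength of velocity vs acceleration coding."""
    vel = [math.exp(-((x - t0) ** 2) / (2 * sigma ** 2)) for x in t]
    acc = [-(x - t0) / sigma ** 2 * v for x, v in zip(t, vel)]
    # Solve the 2x2 normal equations for the least-squares weights.
    svv = sum(v * v for v in vel)
    saa = sum(a * a for a in acc)
    sva = sum(v * a for v, a in zip(vel, acc))
    srv = sum(r * v for r, v in zip(response, vel))
    sra = sum(r * a for r, a in zip(response, acc))
    det = svv * saa - sva * sva
    return ((srv * saa - sra * sva) / det,   # velocity weight
            (svv * sra - srv * sva) / det)   # acceleration weight

# Synthetic neuron driven mostly by velocity plus some acceleration.
t = [i * 0.02 for i in range(101)]  # 0..2 s
vel = [math.exp(-((x - 1.0) ** 2) / (2 * 0.5 ** 2)) for x in t]
acc = [-(x - 1.0) / 0.5 ** 2 * v for x, v in zip(t, vel)]
resp = [0.8 * v + 0.2 * a for v, a in zip(vel, acc)]
w_vel, w_acc = vel_acc_fit(t, resp)
```

On this noiseless synthetic response the fit recovers the generating weights, with the velocity weight dominating, the pattern the abstract reports for most MSTd neurons.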